5 research outputs found

    A Lattice-based Publish-Subscribe Communication Protocol using Accelerated Homomorphic Encryption Primitives

    Key-policy attribute-based encryption (KP-ABE) uses a set of attributes as public keys for encryption. It allows homomorphic evaluation of a ciphertext into another ciphertext of the same message, which can be decrypted only if a certain access policy over the attributes is satisfied. A lattice-based KP-ABE scheme has been reported in several works in the literature, and its software implementation is available in the open-source library PALISADE. However, because the cryptographic primitives in KP-ABE are computationally heavy, non-trivial hardware acceleration is needed for its adoption in practical applications. In this work, we provide GPU-based algorithms that accelerate the KP-ABE encryption and homomorphic evaluation functions and integrate seamlessly into the open-source library, with only minor additional build changes needed to run the GPU kernels. Using the GPU algorithms, we perform homomorphic encryption and homomorphic evaluation 2.1× and 13.2× faster, respectively, than the CPU implementations reported in the literature on an Intel i9. Furthermore, our implementation supports up to 128 attributes for encryption and homomorphic evaluation with both fixed and changing access policies, whereas the GPU-based homomorphic operations reported in the literature support only up to 32 attributes and give estimates for higher attribute counts. We also propose a GPU-based KP-ABE scheme for publish/subscribe messaging applications in which end-to-end security of the messages is guaranteed: while the exchanged messages are encrypted by publishers with as many as 128 attributes, fewer attributes are needed for homomorphic evaluation. Our fast and memory-efficient GPU implementations of the KP-ABE encryption and homomorphic evaluation operations demonstrate that the KP-ABE scheme can be used in practical publish/subscribe messaging applications.
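    A minimal sketch of the publish/subscribe message path described above. Every name here (Broker, Ciphertext, publish, evaluate, decrypt) is a hypothetical placeholder rather than the PALISADE API, and the "ciphertext" is a plain data holder, not real lattice cryptography; the sketch only marks where publisher-side encryption, broker-side homomorphic policy evaluation, and subscriber-side decryption would happen.

    # Hypothetical sketch of the KP-ABE publish/subscribe message path.
    # The crypto steps are placeholders, not the PALISADE API.
    from dataclasses import dataclass
    from queue import Queue
    from typing import List, Optional

    @dataclass
    class Ciphertext:
        payload: bytes                       # stands in for the lattice ciphertext
        attributes: List[str]                # attribute set chosen by the publisher
        policy: Optional[List[str]] = None   # filled in by homomorphic evaluation

    class Broker:
        """Relays ciphertexts and folds the access policy in homomorphically."""
        def __init__(self) -> None:
            self.queue = Queue()

        def publish(self, message: bytes, attributes: List[str]) -> None:
            # Publisher side: encrypt under up to 128 attributes (placeholder).
            self.queue.put(Ciphertext(payload=message, attributes=attributes))

        def evaluate(self, policy: List[str]) -> Ciphertext:
            # Broker side: homomorphically apply the (smaller) access policy to the
            # ciphertext without ever seeing the plaintext; this is the step the
            # paper accelerates on the GPU.
            ct = self.queue.get()
            ct.policy = policy
            return ct

    def decrypt(ct: Ciphertext, policy_key: List[str]) -> Optional[bytes]:
        # Subscriber side: decryption succeeds only when the key matches the policy.
        return ct.payload if ct.policy == policy_key else None

    The point of this flow is that the evaluation step never exposes the plaintext to the broker, which is what provides the end-to-end security claimed above.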

    A high performance CPU-GPU database for streaming data analysis

    The remarkable spread of database management system architectures in the last decade, together with the continual growth in the volume and velocity of data, now known as “Big Data”, keeps pushing researchers and companies to build robust, scalable database management systems (DBMSs) and to improve them so that they adjust smoothly to the evolution of data. At the same time, there is a tendency to supplement the conventional processing units (PUs), the Central Processing Units (CPUs), with additional computing power such as the emerging Graphics Processing Units (GPUs). The research community has recognized the potential of this computing power for data-intensive applications, and several studies in recent years have built remarkable DBMSs by integrating GPUs under different workload distribution algorithms and query optimization protocols. We address a new approach by building a hybrid, columnar, high-performance database management system, which we call DOLAP, that adopts the Online Analytical Processing (OLAP) infrastructure. Distinctively from previous hybrid DBMSs, DOLAP relies on Bloom filters while performing different operations on data (ingesting, checking, modifying, and deleting). We implement this probabilistic data structure in DOLAP to prevent unnecessary memory accesses while checking the database’s data records; this method proves useful, reducing total running times by 35%. Moreover, since there are two main PUs with different characteristics, the CPU and the GPU, a workload distribution model that effectively decides which unit executes a query at a given time must be defined to improve the efficiency of the system. We therefore propose three load-balancing models: random-based, algorithm-based, and improved algorithm-based. We run our tests on the Chicago Taxi Driver dataset taken from Kaggle, and among the three load-balancing models, the improved algorithm-based model distributes the query load between the CPUs and GPUs most effectively, outperforming the other models in nearly all test runs.
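    A self-contained sketch of the Bloom-filter check the paragraph above relies on. The bit-array size, the number of hash functions, and the double-hashing construction are illustrative choices, not parameters taken from DOLAP; the sketch only shows why a negative answer lets the system skip the memory access entirely.

    # Self-contained Bloom filter: no false negatives, tunable false-positive rate.
    # Sizes and hash construction are illustrative, not DOLAP's actual parameters.
    import hashlib

    class BloomFilter:
        def __init__(self, num_bits: int = 1 << 20, num_hashes: int = 7):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(num_bits // 8 + 1)

        def _positions(self, key: str):
            # Derive k bit positions from one SHA-256 digest via double hashing.
            digest = hashlib.sha256(key.encode()).digest()
            h1 = int.from_bytes(digest[:8], "little")
            h2 = int.from_bytes(digest[8:16], "little")
            for i in range(self.num_hashes):
                yield (h1 + i * h2) % self.num_bits

        def add(self, key: str) -> None:
            for pos in self._positions(key):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, key: str) -> bool:
            return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

    # A lookup first consults the filter; a negative answer means the record is
    # definitely absent, so the expensive storage access can be skipped.
    index = BloomFilter()
    index.add("record:41735")
    if not index.might_contain("record:99999"):
        pass  # definitely absent: skip the memory access entirely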

    Dedicated server design for Physical Web applications

    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2019. Includes bibliographical references (leaves: 48-49). Text in English; abstract in Turkish and English. With the impressive technological advances the world is witnessing, giants such as Facebook, Google, Apple, Microsoft, and other technology companies offer services to millions of clients, services that usually reach users’ devices within seconds. Alongside Physical Web applications, which let nearby things interact and be reached based on proximity, and the coming IoT infrastructure expected to connect 20.4 billion devices by 2020, the amount of data transferred and the services provided will be enormous. Behind the instant delivery of this data and these services stands a major energy consumer: the web server. Although web servers perform impressively and respond to billions of queries and requests, their remarkable energy consumption must be highlighted. This work therefore proposes a new approach to reduce energy consumption in such a scenario: the 18-core, energy-efficient Parallella board is used to build an energy-efficient server that can offer services triggered by various devices or by ordinary web requests in any environment, and to show that a cluster of Parallella boards can perform comparably to other similar servers handling web content (e.g., a Raspberry Pi server). We show how these boards work under a low energy supply while users access web content hosted on a Parallella cluster. The source code of the project is available on GitHub.
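    As a rough illustration of the serving side of this architecture, the sketch below is a minimal static-content server such as each low-power node could run; the port, the document root, and the idea of one such process per board are assumptions for illustration, not details from the thesis, whose actual Parallella-specific code is in its GitHub repository.

    # Minimal static-content server as a stand-in for what a low-power node serves.
    # Port and document root are illustrative assumptions.
    from functools import partial
    from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

    def serve(port: int = 8080, root: str = "/srv/www") -> None:
        handler = partial(SimpleHTTPRequestHandler, directory=root)
        ThreadingHTTPServer(("", port), handler).serve_forever()

    if __name__ == "__main__":
        serve()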

    LSTM-AE for anomaly detection on multivariate telemetry data

    Organizations and companies that collect data generated by sales, transactions, client/server communications, IoT nodes, devices, engines, or any other data-generating or data-exchanging source need to analyze this data to reveal insights about the activities running on their systems. Streaming data has multivariate variables with dependencies on one another that extend temporally, to previous time steps. Long Short-Term Memory (LSTM) is a variant of the recurrent neural network capable of learning long-term dependencies from previous timesteps of sequence-shaped data, so an LSTM model is a valid option for offline anomaly detection on our data and can help foresee future system incidents; anything that negatively affects the system, or the services provided through it, is considered an incident. Moreover, the raw input data might be noisy and unsuitable for the model, leading to misleading predictions. A wiser choice is an LSTM autoencoder (LSTM-AE), which specializes in extracting meaningful features from the examined data and looks back several steps to preserve temporal dependencies. In our work, we developed two LSTM-AE models and evaluated them in an industrial setup at Koçfinans (a finance company operating in Turkey), which runs a distributed system of several nodes hosting dozens of microservices. The outcome of this study shows that our trained LSTM-AE models succeed in identifying atypical behavior in offline data with high accuracy. Furthermore, the models identified the system failures at the exact times of the two previously reported failures, and after deployment they raised warnings a week before an actual failure, proving their efficiency on online data. Our models achieved 99.7% accuracy and an F1-score of 89.1%. The study also shows how a suitable LSTM-AE architecture can be found when time-series data with temporal dependencies is fed to the model.
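    A minimal sketch of an LSTM autoencoder used this way, written with Keras. The window length, layer sizes, and thresholding rule are illustrative assumptions, not the architecture or hyperparameters reported in the study.

    # Illustrative LSTM autoencoder for windows of multivariate telemetry.
    # Window length, layer sizes, and the threshold rule are assumptions.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    TIMESTEPS, FEATURES = 30, 8   # lookback window and number of telemetry channels

    def build_lstm_ae() -> tf.keras.Model:
        model = models.Sequential([
            layers.Input(shape=(TIMESTEPS, FEATURES)),
            layers.LSTM(64),                                # encoder: compress the window
            layers.RepeatVector(TIMESTEPS),                 # repeat the latent vector per step
            layers.LSTM(64, return_sequences=True),         # decoder: rebuild the sequence
            layers.TimeDistributed(layers.Dense(FEATURES)),
        ])
        model.compile(optimizer="adam", loss="mse")
        return model

    def anomaly_scores(model: tf.keras.Model, windows: np.ndarray) -> np.ndarray:
        # Per-window reconstruction error; unusually high error marks atypical behavior.
        reconstruction = model.predict(windows, verbose=0)
        return np.mean((windows - reconstruction) ** 2, axis=(1, 2))

    One common way to turn such scores into incident alerts is to train only on windows of normal operation and flag any window whose score exceeds a high percentile of the training-error distribution.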

    Machine learning-based load distribution and balancing in heterogeneous database management systems

    For dynamic and continuous data analysis, conventional OLTP systems are slow. Today's cutting-edge high-performance computing hardware, such as GPUs, is used to accelerate data analysis tasks that traditionally run on CPUs in classical database management systems (DBMSs). When CPUs and GPUs are used together, the architectural heterogeneity, that is, jointly leveraging hardware with different performance characteristics, creates complex problems that need careful treatment for performance optimization. Load distribution and balancing are crucial problems for DBMSs running on heterogeneous architectures. In this work, focusing on a hybrid CPU-GPU database management system that processes users' queries, we propose heuristic and machine-learning-based (ML-based) load distribution and balancing models. In more detail, we employ multiple linear regression (MLR), random forest (RF), and AdaBoost (Ada) models to dynamically decide the processing unit for each incoming query based on the predicted response times on both the CPU and the GPU. The ML-based models outperform the other algorithms, as well as the CPU-only and GPU-only running modes, by up to 27%, 29%, and 40%, respectively, in overall performance (response time) on intense, real-life working scenarios. Finally, we propose a hybrid load-balancing model that would be more efficient than the models we tested in this work.
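    A minimal sketch of the routing idea described above, using scikit-learn. The regressor families match the ones named in the abstract (MLR, RF, AdaBoost), but the query features, hyperparameters, and the routing rule itself are assumptions for illustration.

    # Illustrative per-query routing: predict the response time on each processing
    # unit and dispatch the query to the one predicted to be faster.
    import numpy as np
    from sklearn.base import clone
    from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    REGRESSORS = {
        "mlr": LinearRegression(),
        "rf": RandomForestRegressor(n_estimators=100),
        "ada": AdaBoostRegressor(n_estimators=100),
    }

    def train_pair(name, X_queries, y_cpu_ms, y_gpu_ms):
        """Fit one regressor per processing unit on past query/response-time logs."""
        cpu_model = clone(REGRESSORS[name]).fit(X_queries, y_cpu_ms)
        gpu_model = clone(REGRESSORS[name]).fit(X_queries, y_gpu_ms)
        return cpu_model, gpu_model

    def route(query_features, cpu_model, gpu_model) -> str:
        """Send the incoming query to whichever unit is predicted to answer faster."""
        x = np.asarray(query_features).reshape(1, -1)
        return "GPU" if gpu_model.predict(x)[0] < cpu_model.predict(x)[0] else "CPU"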